Acceleration Techniques for Approximating the Matrix Exponential
Supervisor: Prof.ssa Valeria Simoncini
Acknowledgements

I would like to express my deep and sincere gratitude to my supervisor, Professor Valeria Simoncini of the University of Bologna; her guidance has been of great value in this study. I am deeply grateful to Professor Luciano Lopez of the University of Bari for his important support throughout this work and his fatherly advice. My sincere thanks go to Professors Tiziano Politi, Cinzia Elia and Alessandro Pugliese. I owe my loving thanks to my husband Domenico, my parents and my sister.
Similar resources
Acceleration Techniques for Approximating the Matrix Exponential Operator
In this paper we investigate some well established and more recent methods that aim at approximating the vector exp(A)v when A is a large symmetric negative semidefinite matrix, by efficiently combining subspace projections and spectral transformations. We show that some recently developed acceleration procedures may be restated as preconditioning techniques for the partial fraction expansion f...
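The projection idea described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's code: the function name, the reorthogonalization choice, and the test matrix are ours, and the sketch uses a plain Lanczos projection without the acceleration procedures or spectral transformations the paper studies.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expv(A, v, m=40):
    """Approximate exp(A) @ v by projecting onto the Krylov subspace
    K_m(A, v) built by the Lanczos process (A symmetric)."""
    n = v.size
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    beta0 = np.linalg.norm(v)
    V[:, 0] = v / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= T[j, j - 1] * V[:, j - 1]
        T[j, j] = V[:, j] @ w
        w -= T[j, j] * V[:, j]
        # full reorthogonalization for numerical stability
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        if j + 1 < m:
            beta = np.linalg.norm(w)
            if beta < 1e-14:          # lucky breakdown: exact subspace found
                m = j + 1
                break
            T[j, j + 1] = T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    V, T = V[:, :m], T[:m, :m]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # exp(A) v  ≈  ||v|| * V exp(T) e1  (exp of the small projected matrix)
    return beta0 * V @ (expm(T) @ e1)

# symmetric negative semidefinite test matrix (1D discrete Laplacian)
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(0).standard_normal(n)
approx = lanczos_expv(A, v, m=40)
exact = expm(A) @ v
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Only small m-dimensional quantities are exponentiated; A enters solely through matrix-vector products, which is what makes the approach viable for large matrices.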
The Structure of Bhattacharyya Matrix in Natural Exponential Family and Its Role in Approximating the Variance of a Statistics
In most situations the best estimator of a function of the parameter exists, but sometimes it has a complex form and its variance cannot be computed explicitly. A lower bound for the variance of an estimator is therefore one of the fundamentals of estimation theory, because it gives an idea of the accuracy of the estimator. It is well known in statistical inference that the Cramé...
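For context, the classical lower bound referred to here is the Cramér–Rao inequality, which the Bhattacharyya bounds refine by using higher-order derivatives of the likelihood; in the regular one-parameter case it reads

```latex
\operatorname{Var}_\theta\bigl(T(X)\bigr) \;\ge\; \frac{\bigl[\psi'(\theta)\bigr]^2}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\,\right],
```

where T(X) is an unbiased estimator of ψ(θ) and I(θ) is the Fisher information.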
Approximating the leading singular triplets of a large matrix function
Given a large square matrix A and a sufficiently regular function f so that f(A) is well defined, we are interested in the approximation of the leading singular values and corresponding singular vectors of f(A), and in particular of ‖f(A)‖, where ‖ · ‖ is the matrix norm induced by the Euclidean vector norm. Since neither f(A) nor f(A)v can be computed exactly, we introduce and analyze an inexa...
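A minimal sketch of this setting, assuming SciPy and taking f = exp: each matrix-vector product with f(A) is itself evaluated only approximately by an inner iteration, f(A) is never formed, and the leading singular value approximates ‖f(A)‖₂. The diagonal test matrix and all parameters are our own illustrative choices, not the paper's.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, svds, expm_multiply

n = 100
# symmetric negative semidefinite test matrix with known spectrum [-4, 0]
A = diags(np.linspace(-4.0, 0.0, n)).tocsr()

# f(A) = exp(A) is available only through its action: every matvec below
# is an (inner, iterative) approximation of exp(A) @ v
F = LinearOperator((n, n),
                   matvec=lambda v: expm_multiply(A, v),
                   rmatvec=lambda v: expm_multiply(A, v))  # exp(A) symmetric

# leading singular triplet of the operator => s[0] approximates ||exp(A)||_2
u, s, vt = svds(F, k=1)
print(s[0])
```

Here the exact value is known (‖exp(A)‖₂ = exp(0) = 1 for this diagonal example), which makes the sketch easy to check; the paper's contribution concerns controlling the inexactness of those inner evaluations.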
Matrix Functions
1. Introduction. In this chapter, we give an overview of methods to compute functions of a (usually square) matrix A, with particular emphasis on the matrix exponential and the matrix sign function. We distinguish between methods which compute the entire matrix function, i.e. which return a matrix, and those which compute the action of the matrix function on a vector. The latter task...
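The distinction drawn in this introduction can be illustrated with SciPy (a sketch of the two tasks, not code from the chapter): expm forms the entire matrix exp(A), whereas expm_multiply computes only the action exp(L)v without ever forming exp(L).

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Task 1: compute the entire matrix function (feasible for small dense A)
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
E = expm(A)            # the full 2x2 matrix exp(A)

# Task 2: compute only the action exp(L) @ v (the option for large sparse L)
n = 300
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsr()
v = np.ones(n)
w = expm_multiply(L, v)  # exp(L) is never built explicitly

print(E)
print(np.linalg.norm(w))
```

For large L the second route is the only practical one, since exp(L) is generally dense even when L is sparse.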
New conditions for non-stagnation of minimal residual methods
In the solution of large linear systems, a condition guaranteeing that a minimal residual Krylov subspace method makes some progress, i.e., that it does not stagnate, is that the symmetric part of the coefficient matrix be positive definite. This condition results in a well-established worst-case bound for the convergence rate of the iterative method, due to Elman. This bound has been extensive...
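The Elman bound referred to here can be stated as follows: if the symmetric part H = (A + Aᵀ)/2 of the coefficient matrix is positive definite, then the residuals of a minimal residual method such as GMRES satisfy

```latex
\|r_k\|_2 \;\le\; \left( 1 - \frac{\lambda_{\min}(H)^2}{\|A\|_2^2} \right)^{k/2} \|r_0\|_2,
\qquad H = \tfrac{1}{2}\bigl(A + A^{\mathsf T}\bigr),
```

so each step reduces the residual by a fixed factor strictly less than one, which rules out stagnation.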